Section: New Results
Invertible Deep Networks
Participant: Edouard Oyallon (in collaboration with J.H. Jacobsen and A. Smeulders, Instituut voor Informatica)
It is widely believed that the success of deep convolutional networks rests on progressively discarding uninformative variability in the input with respect to the problem at hand. This view is supported empirically by the difficulty of recovering images from their hidden representations in most commonly used network architectures. In this paper we show, via a one-to-one mapping, that this loss of information is not a necessary condition for learning representations that generalize well on complex problems such as ImageNet. Via a cascade of homeomorphic layers, we build the